kernel function
But How Does It Work in Theory? Linear SVM with Random Features
Yitong Sun, Anna Gilbert, Ambuj Tewari
The random features method, proposed by Rahimi and Recht [2008], maps the data to a finite-dimensional feature space as a random approximation to the feature space of RBF kernels. With explicit finite-dimensional feature vectors available, the original kernel support vector machine (KSVM) is converted to a linear support vector machine (LSVM), which can be trained by faster algorithms (Shalev-Shwartz et al. [2011]).
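A minimal sketch of the idea, assuming an RBF kernel $k(x,y)=\exp(-\gamma\|x-y\|^2)$ and using scikit-learn's `LinearSVC` (the data, feature count, and parameter choices here are illustrative): draw random Fourier frequencies from the kernel's spectral density, map the data to explicit features, and train a linear SVM on them.

```python
# Random Fourier features (Rahimi & Recht, 2008) + linear SVM: a sketch.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=500, gamma=1.0):
    """Map X (n_samples, d) to z(X) so that z(x).z(y) ~= exp(-gamma*||x-y||^2)."""
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

Z = random_fourier_features(X)   # explicit finite-dimensional features
clf = LinearSVC().fit(Z, y)      # fast linear SVM in the feature space
print("train accuracy:", clf.score(Z, y))
```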
Kernel functions based on triplet comparisons
Given only information about a data set in the form of similarity triplets ("object A is more similar to object B than to object C"), we propose two ways of defining a kernel function on the data set. While previous approaches construct a low-dimensional Euclidean embedding of the data set that reflects the given similarity triplets, we aim at defining kernel functions that correspond to high-dimensional embeddings. These kernel functions can subsequently be used to apply any kernel method to the data set.
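One plausible instantiation of such a kernel, sketched here under our own assumptions (the paper's exact constructions may differ): encode each object by the signed vector of its answers to all "closer to b than to c?" comparisons, and let the kernel be the inner product of these high-dimensional feature vectors, which is positive semidefinite by construction.

```python
# Illustrative triplet-comparison kernel: K = F F^T for a feature matrix F
# whose columns index ordered pairs (b, c) of reference objects.
import numpy as np

def triplet_kernel(n, triplets):
    """n: number of objects; triplets: iterable of (a, b, c) meaning
    'object a is more similar to object b than to object c'."""
    F = np.zeros((n, n * n))          # one column per ordered pair (b, c)
    for a, b, c in triplets:
        F[a, b * n + c] += 1.0        # a prefers b over c
        F[a, c * n + b] -= 1.0        # equivalently, a disprefers c over b
    # Row-normalize so objects appearing in many triplets are not over-weighted.
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    F = F / norms
    return F @ F.T                    # Gram matrix usable by any kernel method

# Example: 4 objects, a few similarity triplets.
K = triplet_kernel(4, [(0, 1, 2), (1, 0, 3), (2, 3, 0), (3, 2, 1)])
print(K.shape, np.allclose(K, K.T))   # (4, 4) True; PSD by construction
```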
Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations
Reinforcement learning (RL) algorithms combined with modern function approximators such as kernel functions and deep neural networks have achieved significant empirical successes in large-scale application problems with a massive number of states. From a theoretical perspective, however, RL with function approximation poses a fundamental challenge to developing algorithms with provable computational and statistical efficiency, due to the need to take into consideration both the exploration-exploitation tradeoff that is inherent in RL and the bias-variance tradeoff that is innate in statistical estimation. To address such a challenge, focusing on the episodic setting where the action-value functions are represented by a kernel function or an over-parametrized neural network, we propose the first provable RL algorithm with both polynomial runtime and sample complexity, without additional assumptions on the data-generating model. In particular, for both the kernel and neural settings, we prove that an optimistic modification of the least-squares value iteration algorithm incurs an $\tilde{\mathcal{O}}(\delta_{\mathcal{F}} H^2 \sqrt{T})$ regret, where $\delta_{\mathcal{F}}$ characterizes the intrinsic complexity of the function class $\mathcal{F}$, $H$ is the length of each episode, and $T$ is the total number of episodes. Our regret bounds are independent of the number of states and therefore even allow it to diverge, which exhibits the benefit of function approximation.
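The key step in the kernel setting can be sketched roughly as follows; this is an illustrative simplification under an assumed RBF kernel and hand-picked constants, not the paper's exact algorithm: fit the action-value function by kernel ridge regression on observed transitions, then add an exploration bonus proportional to the kernel posterior width, so under-explored state-action pairs look optimistically valuable.

```python
# Sketch of one optimistic value-iteration step with a kernel approximator.
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def optimistic_q(X, y, X_query, lam=1.0, beta=1.0):
    """X: past (state, action) features; y: regression targets
    r + max_a' Q_{h+1}; returns optimistic Q values at X_query."""
    K = rbf(X, X)
    reg = K + lam * np.eye(len(X))
    mean = rbf(X_query, X) @ np.linalg.solve(reg, y)   # kernel ridge estimate
    # Bonus: kernel analogue of the elliptical confidence width.
    k_q = rbf(X_query, X)
    var = np.ones(len(X_query)) - np.einsum(
        "ij,ij->i", k_q, np.linalg.solve(reg, k_q.T).T)
    return mean + beta * np.sqrt(np.maximum(var, 0.0))

# Toy usage: optimistic values are largest far from the observed data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
print(optimistic_q(X, y, rng.normal(size=(5, 3))))
```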
Personalized Federated Learning With Gaussian Processes
Federated learning aims to learn a global model that performs well on client devices with limited cross-client communication. Personalized federated learning (PFL) further extends this setup to handle data heterogeneity between clients by learning personalized models. A key challenge in this setting is to learn effectively across clients even though each client has unique data that is often limited in size. Here we present pFedGP, a solution to PFL that is based on Gaussian processes (GPs) with deep kernel learning. GPs are highly expressive models that work well in the low-data regime due to their Bayesian nature. However, applying GPs to PFL raises multiple challenges.
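A toy sketch of the personalization mechanism, with a fixed random projection standing in for the learned deep kernel and plain GP regression in place of the paper's classification model (all names and parameters here are illustrative): every client reuses the shared kernel, but each prediction conditions the GP posterior only on that client's own data.

```python
# Per-client GP prediction with a shared "deep" kernel: a hedged sketch.
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(10, 4))       # stand-in for a shared deep network

def features(X):
    return np.tanh(X @ W_shared)          # shared learned representation

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def client_gp_predict(X_client, y_client, X_test, noise=0.1):
    """Personalized prediction: GP posterior mean given this client's data."""
    Fc, Ft = features(X_client), features(X_test)
    K = rbf(Fc, Fc) + noise * np.eye(len(Fc))
    return rbf(Ft, Fc) @ np.linalg.solve(K, y_client)

# Two clients with heterogeneous data; the kernel is shared, the
# conditioning set (and hence the model) is personal to each client.
for shift in (-2.0, 2.0):
    Xc = rng.normal(shift, 1.0, size=(30, 10))
    yc = np.sin(Xc[:, 0])
    print(client_gp_predict(Xc, yc, Xc[:5]))
```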
Efficient Level-Crossing Probability Calculation for Gaussian Process Modeled Data
Li, Haoyu, Michaud, Isaac J, Biswas, Ayan, Shen, Han-Wei
Almost all scientific data have uncertainties originating from different sources. Gaussian process regression (GPR) models are a natural way to model data with Gaussian-distributed uncertainties. GPR also has the benefit of reducing I/O bandwidth and storage requirements for large scientific simulations. However, reconstruction from GPR models suffers from high computational complexity. To make the situation worse, classic approaches for visualizing data uncertainties, such as probabilistic marching cubes, are also computationally very expensive, especially for high-resolution data. In this paper, we accelerate level-crossing probability calculation on GPR models by subdividing the data spatially into a hierarchical data structure and reconstructing values adaptively only in the regions that have a non-zero probability. For each region, leveraging the known GPR kernel and the saved data observations, we propose a novel approach to efficiently calculate an upper bound for the level-crossing probability inside the region, and we use this upper bound to make the subdivision and reconstruction decisions. Experiments calculating level-crossing probability fields on different datasets demonstrate that our value-occurrence probability estimation is accurate at a low computational cost.
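The pruning logic can be illustrated with a simple bound (not necessarily the one the paper derives): a level crossing inside a region requires at least one grid point below the level and at least one above, so a union bound over either event bounds the crossing probability from above, and regions whose bound is negligible can be skipped without reconstruction.

```python
# Illustrative upper bound on the level-crossing probability of a region,
# computed from the GP posterior marginals at the region's grid points.
import numpy as np
from scipy.stats import norm

def crossing_upper_bound(mu, sigma, level):
    """mu, sigma: GP posterior mean/std at a region's grid points.
    A crossing needs some point below the level AND some point above,
    so a union bound on either event upper-bounds the probability."""
    p_below = norm.cdf((level - mu) / sigma)   # P(f(x) < level) per point
    p_above = 1.0 - p_below
    return float(min(p_below.sum(), p_above.sum(), 1.0))

# Usage: prune a region whose bound is negligible, subdivide otherwise.
mu = np.array([3.0, 3.2, 2.9])      # posterior means well above level=1.0
sigma = np.array([0.1, 0.1, 0.1])
if crossing_upper_bound(mu, sigma, level=1.0) < 1e-3:
    print("skip region: no level crossing")   # no reconstruction needed
```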
From Betti Numbers to Persistence Diagrams: A Hybrid Quantum Algorithm for Topological Data Analysis
Persistence diagrams serve as a core tool in topological data analysis, playing a crucial role in pathological monitoring, drug discovery, and materials design. However, existing quantum topological algorithms, such as the LGZ algorithm, can only efficiently compute summary statistics like Betti numbers, failing to provide persistence diagram information that tracks the lifecycle of individual topological features, severely limiting their practical value. This paper proposes a novel quantum-classical hybrid algorithm that achieves, for the first time, the leap from "quantum computation of Betti numbers" to "quantum acquisition of practical persistence diagrams." The algorithm leverages the LGZ quantum algorithm as an efficient feature extractor, mining the harmonic form eigenvectors of the combinatorial Laplacian as well as Betti numbers, constructing specialized topological kernel functions to train a quantum support vector machine (QSVM), and learning the mapping from quantum topological features to persistence diagrams. The core contributions of this algorithm are: (1) elevating quantum topological computation from statistical summaries to pattern recognition, greatly expanding its application value; (2) obtaining more practical topological information in the form of persistence diagrams for real-world applications while maintaining the exponential speedup advantage of quantum computation; (3) proposing a novel hybrid paradigm of "classical precision guiding quantum efficiency." This method provides a feasible pathway for the practical implementation of quantum topological data analysis.
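The classical quantities the hybrid algorithm extracts can be illustrated on a tiny example (boundary matrices written by hand; the LGZ quantum algorithm estimates these spectrally): the k-th combinatorial Laplacian $L_k = B_k^\top B_k + B_{k+1} B_{k+1}^\top$, whose null-space dimension is the Betti number and whose zero-eigenvectors are the harmonic forms used as features.

```python
# Combinatorial Laplacian, Betti numbers, and harmonic forms: a classical sketch.
import numpy as np

# Hollow triangle: vertices {0,1,2}, edges {01, 02, 12}, no 2-simplices.
B1 = np.array([[-1, -1,  0],   # boundary of edges: rows = vertices
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)

L0 = B1 @ B1.T                 # 0-Laplacian (the graph Laplacian)
L1 = B1.T @ B1                 # 1-Laplacian (no B2 term: no triangles filled)

def betti_and_harmonics(L, tol=1e-9):
    vals, vecs = np.linalg.eigh(L)
    zero = vals < tol
    return int(zero.sum()), vecs[:, zero]   # Betti number, harmonic forms

b0, _ = betti_and_harmonics(L0)
b1, h1 = betti_and_harmonics(L1)
print(b0, b1)                  # 1 connected component, 1 one-dimensional hole
```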